Although existing face anti-spoofing (FAS) methods achieve high accuracy in intra-domain experiments, their effectiveness degrades sharply in cross-domain scenarios due to poor generalizability. Recently, various techniques have been explored, such as domain generalization and representation disentanglement. However, improvement is still limited by two issues: 1) it is hard to map all faces into a shared feature space; if faces from unknown domains are not mapped into known regions of the shared feature space, inaccurate predictions are obtained unexpectedly; 2) it is hard to fully cover the various spoof traces for disentanglement. In this paper, we propose a feature-generation and hypothesis-verification framework to alleviate both issues. Above all, a feature generation network that generates hypotheses of real faces and known attacks is introduced into the FAS task for the first time. Subsequently, two hypothesis verification modules are applied to judge whether the input face comes from the real-face space and the real-face distribution, respectively. In addition, we give some analyses of the relationship between our framework and Bayesian uncertainty estimation, providing theoretical support for reliable defense in unknown domains. Experimental results show that our framework achieves promising results and outperforms state-of-the-art methods on public datasets.
translated by 谷歌翻译
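The verification step described above can be sketched as an out-of-distribution check: accept a hypothesis only if the feature lies close enough to the known distribution. This is a minimal illustrative stand-in, not the paper's actual modules; the Mahalanobis-distance test, function name, and threshold are all assumptions.

```python
import numpy as np

def hypothesis_verify(feat, real_mean, real_cov_inv, threshold):
    """Toy hypothesis verification: accept the 'real face' hypothesis
    only if the feature's Mahalanobis distance to the real-face
    distribution is below a threshold, so inputs from unknown domains
    are rejected rather than silently misclassified."""
    d = feat - real_mean
    dist = float(np.sqrt(d @ real_cov_inv @ d))
    return dist < threshold
```

A feature far from the known real-face region fails verification even if a closed-set classifier would have assigned it a confident label.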
Visual place recognition (VPR) is usually considered as a specific image retrieval problem. Limited by existing training frameworks, most deep learning-based works cannot extract sufficiently stable global features from RGB images and rely on a time-consuming re-ranking step to exploit spatial structural information for better performance. In this paper, we propose StructVPR, a novel training architecture for VPR, to enhance structural knowledge in RGB global features and thus improve feature stability in a constantly changing environment. Specifically, StructVPR uses segmentation images as a more definitive source of structural knowledge input to a CNN and applies knowledge distillation to avoid online segmentation and inference of the seg-branch at test time. Considering that not all samples contain high-quality and helpful knowledge, and some even hurt the performance of distillation, we partition samples and weigh each sample's distillation loss to enhance the expected knowledge precisely. Finally, StructVPR achieves impressive performance on several benchmarks using only global retrieval and even outperforms many two-stage approaches by a large margin. With additional re-ranking, StructVPR achieves state-of-the-art performance while maintaining a low computational cost.
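The per-sample weighting of the distillation loss described above can be sketched as follows. This is a hedged sketch only: the L2 feature distance, the function name, and the weighting scheme are illustrative assumptions, not StructVPR's exact formulation.

```python
import numpy as np

def weighted_distillation_loss(student_feats, teacher_feats, weights):
    """Per-sample weighted L2 distillation loss.

    Samples judged to carry helpful structural knowledge get larger
    weights; unhelpful or harmful samples are down-weighted (a weight
    of 0 drops a sample from the distillation objective entirely)."""
    per_sample = np.sum((student_feats - teacher_feats) ** 2, axis=1)
    return float(np.sum(weights * per_sample) / max(np.sum(weights), 1e-8))
```

Partitioning samples then amounts to choosing the weight vector, so that only the expected knowledge is transferred from the seg-branch teacher.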
In 3D vision, visual relocalization has been widely discussed: given a pre-built 3D visual map, estimate the 6-DoF (degrees of freedom) pose of a query image. Relocalization in large-scale indoor environments enables attractive applications such as augmented reality and robot navigation. However, appearance in such environments changes rapidly as the camera moves, which is challenging for relocalization systems. To address this problem, we propose a virtual-view-synthesis-based approach, RenderNet, to enrich the database and refine poses for this particular scenario. Instead of rendering real images, which requires high-quality 3D models, we choose to directly render the necessary global and local features of virtual viewpoints and apply them in the subsequent image retrieval and feature matching operations, respectively. The proposed method can largely improve performance in large-scale indoor environments, e.g., achieving improvements of 7.1% and 12.2% on the InLoc dataset.
Transformers have succeeded in many vision tasks thanks to their capability of capturing long-range dependencies. However, their quadratic computational complexity poses a major obstacle to applying them to vision tasks requiring dense prediction, such as object detection, feature matching, stereo, etc. We introduce QuadTree Attention, which reduces the computational complexity from quadratic to linear. Our quadtree transformer builds token pyramids and computes attention in a coarse-to-fine manner. At each level, the top K patches with the highest attention scores are selected, so that at the next level attention is only evaluated within the relevant regions corresponding to these top K patches. We show that quadtree attention achieves state-of-the-art performance in various vision tasks, e.g., a 4.0% improvement in feature matching on ScanNet, about 50% flops reduction in stereo matching, a 0.4-1.5% improvement in ImageNet classification, a 1.2-1.8% improvement in COCO object detection, and a 0.7-2.4% improvement in semantic segmentation over previous state-of-the-art transformers. The code is available at https://github.com/tangshitao/quadtreeattention.
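The top-K mechanism described above can be illustrated at a single level: each query attends only to its K highest-scoring keys, so the cost is linear in sequence length for fixed K. This is a simplified sketch, not the full recursive quadtree implementation; the function name and the single-level formulation are assumptions for illustration.

```python
import numpy as np

def topk_attention(q, k, v, topk):
    """Scaled dot-product attention restricted to the top-K keys per
    query. QuadTree Attention applies this selection recursively over
    a token pyramid, descending only into regions whose coarse-level
    scores rank in the top K."""
    scores = q @ k.T / np.sqrt(q.shape[-1])       # (Nq, Nk)
    idx = np.argsort(-scores, axis=1)[:, :topk]    # top-K key indices per query
    out = np.zeros((q.shape[0], v.shape[1]))
    for i in range(q.shape[0]):
        s = scores[i, idx[i]]
        w = np.exp(s - s.max())                    # softmax over the K kept keys
        w = w / w.sum()
        out[i] = w @ v[idx[i]]
    return out
```

With `topk` equal to the full key count this reduces to ordinary attention; shrinking `topk` trades a sparser attention pattern for linear complexity.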
Data-free knowledge distillation (DFKD) has recently been attracting increasing attention from the research community, attributed to its capability of compressing a model using only synthetic data. Despite the encouraging results, state-of-the-art DFKD methods still suffer from inefficient data synthesis, making the data-free training process extremely time-consuming and thus inapplicable to large-scale tasks. In this work, we introduce an efficient scheme, termed FastDFKD, that allows us to accelerate DFKD by orders of magnitude. At the heart of our approach is a novel strategy to reuse the shared common features in training data so as to synthesize different data instances. Unlike prior methods that optimize a set of data independently, we propose to learn a meta-synthesizer that seeks common features as the initialization for fast data synthesis. As a result, FastDFKD achieves data synthesis within only a few steps, significantly improving the efficiency of data-free training. Experiments on CIFAR, NYUv2, and ImageNet demonstrate that the proposed FastDFKD achieves 10$\times$ and even 100$\times$ acceleration while preserving performance on par with the state of the art.
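The meta-initialization idea above can be sketched with a toy inversion objective: every synthetic sample starts from a shared learned initialization instead of scratch, so only a few gradient steps are needed. The quadratic loss, step count, and function name are illustrative assumptions standing in for the real model-inversion loss.

```python
import numpy as np

def synthesize(meta_init, target, steps=10, lr=0.25):
    """Synthesize one data instance from a shared meta-initialization.

    A good meta_init already encodes the common features, so a short
    gradient descent on the (toy) objective ||x - target||^2 suffices,
    mimicking FastDFKD's few-step synthesis."""
    x = meta_init.copy()
    for _ in range(steps):
        grad = 2.0 * (x - target)    # gradient of the toy inversion loss
        x -= lr * grad
    return x
```

Independently optimizing each sample from a random start would need many more steps; amortizing the start point across samples is where the claimed speedup comes from.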
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change how we do the work. We first discuss how AI can be used to enhance the result of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
An increasing number of public datasets have shown a marked clinical impact on assessing anatomical structures. However, each of the datasets is small, partially labeled, and rarely investigates severe tumor subjects. Moreover, current models are limited to segmenting specific organs/tumors, which cannot be extended to novel domains and classes. To tackle these limitations, we introduce embedding learned from Contrastive Language-Image Pre-training (CLIP) to segmentation models, dubbed the CLIP-Driven Universal Model. The Universal Model can better segment 25 organs and 6 types of tumors by exploiting the semantic relationship between abdominal structures. The model is developed from an assembly of 14 datasets with 3,410 CT scans and evaluated on 6,162 external CT scans from 3 datasets. We rank first on the public leaderboard of the Medical Segmentation Decathlon (MSD) and achieve state-of-the-art results on Beyond The Cranial Vault (BTCV). Compared with dataset-specific models, the Universal Model is computationally more efficient (6x faster), generalizes better to CT scans from varying sites, and shows stronger transfer learning performance on novel tasks. The design of CLIP embedding enables the Universal Model to be easily extended to new classes without catastrophically forgetting the previously learned classes.
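The extensibility claim above rests on conditioning the segmentation head on text embeddings: each class's embedding acts as a dynamic per-pixel classifier, so adding a class only adds a row. The following is a toy sketch of that conditioning under assumed shapes and names, not the model's actual architecture.

```python
import numpy as np

def text_driven_masks(pixel_feats, class_embeds):
    """Predict one soft binary mask per class from text embeddings.

    pixel_feats: (P, D) per-pixel features; class_embeds: (C, D)
    text embeddings, one per organ/tumor class. Each class embedding
    is dotted with every pixel feature and passed through a sigmoid,
    yielding independent per-class masks."""
    logits = pixel_feats @ class_embeds.T        # (P, C) class logits per pixel
    return 1.0 / (1.0 + np.exp(-logits))         # sigmoid, not softmax: classes overlap
```

Because classes are scored independently (sigmoid rather than softmax), a new class embedding can be appended without retraining or interfering with existing classes, which is the intuition behind avoiding catastrophic forgetting here.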
Recent advances in self-supervised learning (SSL) in computer vision are primarily comparative; their goal is to preserve invariant and discriminative semantics in latent representations by comparing siamese image views. However, the preserved high-level semantics do not contain enough local information, which is vital in medical image analysis (e.g., image-based diagnosis and tumor segmentation). To mitigate the locality problem of comparative SSL, we propose to incorporate the task of pixel restoration for explicitly encoding more pixel-level information into high-level semantics. We also address the preservation of scale information, a powerful tool for image understanding that has not drawn much attention in SSL. The resulting framework can be formulated as a multi-task optimization problem on the feature pyramid. Specifically, we conduct multi-scale pixel restoration and siamese feature comparison in the pyramid. In addition, we propose non-skip U-Net to build the feature pyramid and develop sub-crop to replace multi-crop in 3D medical imaging. The proposed unified SSL framework (PCRLv2) surpasses its self-supervised counterparts on various tasks, including brain tumor segmentation (BraTS 2018), chest pathology identification (ChestX-ray, CheXpert), pulmonary nodule detection (LUNA), and abdominal organ segmentation (LiTS), sometimes outperforming them by large margins with limited annotations.
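The multi-task objective described above pairs a pixel-level restoration term with a feature-level comparison term. Below is a minimal sketch of such a combined loss at one pyramid level; the MSE/cosine choices, the balancing weight, and the function name are assumptions for illustration, not PCRLv2's exact loss.

```python
import numpy as np

def pcrl_style_loss(restored, target, z1, z2, alpha=1.0):
    """Combine pixel restoration with siamese feature comparison.

    restored/target: reconstructed and ground-truth patches (the pixel
    term keeps local detail); z1/z2: features of two augmented views
    (the cosine term keeps invariant semantics). alpha balances the
    two tasks; in PCRLv2 a loss of this flavor is applied at every
    level of the feature pyramid."""
    pixel = float(np.mean((restored - target) ** 2))
    cos = float(z1 @ z2 / (np.linalg.norm(z1) * np.linalg.norm(z2)))
    return pixel + alpha * (1.0 - cos)
```

The loss is zero exactly when reconstruction is perfect and the two views' features are aligned, which is the joint objective the abstract motivates.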
We present Muse, a text-to-image Transformer model that achieves state-of-the-art image generation performance while being significantly more efficient than diffusion or autoregressive models. Muse is trained on a masked modeling task in discrete token space: given the text embedding extracted from a pre-trained large language model (LLM), Muse is trained to predict randomly masked image tokens. Compared to pixel-space diffusion models, such as Imagen and DALL-E 2, Muse is significantly more efficient due to the use of discrete tokens and requiring fewer sampling iterations; compared to autoregressive models, such as Parti, Muse is more efficient due to the use of parallel decoding. The use of a pre-trained LLM enables fine-grained language understanding, translating to high-fidelity image generation and the understanding of visual concepts such as objects, their spatial relationships, pose, and cardinality. Our 900M parameter model achieves a new SOTA on CC3M, with an FID score of 6.06. The Muse 3B parameter model achieves an FID of 7.88 on zero-shot COCO evaluation, along with a CLIP score of 0.32. Muse also directly enables a number of image editing applications without the need to fine-tune or invert the model: inpainting, outpainting, and mask-free editing. More results are available at https://muse-model.github.io
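The parallel decoding contrasted with autoregression above can be sketched as iterative confidence-based infilling: at each step, the most confident still-masked tokens are committed in parallel. This is a simplified illustration with fixed predictions; a real model re-predicts the distributions after every step, and the function name and scheduling are assumptions.

```python
import numpy as np

def parallel_decode(probs, mask, steps):
    """Fill masked token slots over a few parallel refinement steps.

    probs: (N, V) predicted distribution per token slot; mask: boolean
    array marking slots to fill. Each step commits the highest-
    confidence masked slots, instead of decoding one token per forward
    pass as in autoregressive generation."""
    tokens = np.full(len(mask), -1)
    remaining = mask.copy()
    per_step = max(1, int(np.ceil(remaining.sum() / steps)))
    while remaining.any():
        conf = probs.max(axis=1) * remaining       # zero out already-filled slots
        pick = [i for i in np.argsort(-conf)[:per_step] if remaining[i]]
        if not pick:                               # guard: always make progress
            pick = [int(np.flatnonzero(remaining)[0])]
        for i in pick:
            tokens[i] = int(probs[i].argmax())
            remaining[i] = False
    return tokens
```

Decoding N tokens thus costs on the order of `steps` forward passes rather than N, which is the source of the efficiency advantage over models like Parti.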
Feature selection helps reduce data acquisition costs in ML, but the standard approach is to train models with static feature subsets. Here, we consider the dynamic feature selection (DFS) problem where a model sequentially queries features based on the presently available information. DFS is often addressed with reinforcement learning (RL), but we explore a simpler approach of greedily selecting features based on their conditional mutual information. This method is theoretically appealing but requires oracle access to the data distribution, so we develop a learning approach based on amortized optimization. The proposed method is shown to recover the greedy policy when trained to optimality and outperforms numerous existing feature selection methods in our experiments, thus validating it as a simple but powerful approach for this problem.
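The greedy policy described above can be sketched directly: at each step, query the feature with the highest estimated conditional mutual information given what has been selected so far. In the paper this oracle is replaced by an amortized network; here it is a caller-supplied stand-in, and the function name and interface are assumptions.

```python
def greedy_select(score_fn, n_features, budget):
    """Greedily query features by conditional informativeness.

    score_fn(selected, j) should estimate I(y; x_j | x_selected),
    the conditional mutual information of candidate feature j given
    the already-selected set. Repeats until the query budget is spent."""
    selected = []
    for _ in range(budget):
        candidates = [j for j in range(n_features) if j not in selected]
        best = max(candidates, key=lambda j: score_fn(selected, j))
        selected.append(best)
    return selected
```

Because the score is conditioned on the running selection, a feature that is redundant with already-queried ones scores low and is skipped, which static one-shot feature selection cannot do.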